18 research outputs found

    Vision-Based navigation system for unmanned aerial vehicles

    International Mention in the doctoral degree. The main objective of this dissertation is to provide Unmanned Aerial Vehicles (UAVs) with a robust navigation system that allows them to perform complex tasks autonomously and in real time. The proposed algorithms address the navigation problem in both outdoor and indoor environments, relying mainly on visual information captured by monocular cameras. In addition, this dissertation presents the advantages of using visual sensors, either as the main source of data or as a complement to other sensors, in order to improve the accuracy and robustness of the sensing tasks. The dissertation covers several research topics based on computer vision techniques: (I) Pose Estimation, which provides a solution for estimating the 6D pose of the UAV. The algorithm combines the SIFT detector with the FREAK descriptor, which maintains the feature-matching performance while decreasing the computational time. The pose estimation problem is then solved by decomposing the world-to-frame and frame-to-frame homographies. (II) Obstacle Detection and Collision Avoidance, in which the UAV senses and detects frontal obstacles situated in its path. The detection algorithm mimics how humans detect approaching obstacles: it analyzes the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around those points in consecutive frames. Then, by comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, the algorithm extracts the collision-free zones around the obstacle and, combining them with the tracked waypoints, the UAV performs the avoidance maneuver.
(III) Navigation Guidance, which generates the waypoints that determine the flight path based on the environment and the situated obstacles, and then provides a strategy to follow the path segments efficiently and perform the flight maneuvers smoothly. (IV) Visual Servoing, which offers different control solutions (Fuzzy Logic Control (FLC) and PID) based on the obtained visual information, in order to achieve flight stability, perform the correct maneuvers, avoid possible collisions and track the waypoints. All the proposed algorithms have been verified with real flights in both indoor and outdoor environments, taking visual conditions such as illumination and texture into consideration. The obtained results have been validated against other systems, such as the VICON motion capture system and DGPS in the case of the pose estimation algorithm. In addition, the proposed algorithms have been compared with several previous works in the state of the art, and the results prove the improvement in the accuracy and robustness of the proposed algorithms. Finally, this dissertation concludes that visual sensors, being lightweight, low-consumption and able to provide reliable information, are a powerful tool for navigation systems to increase the autonomy of UAVs in real-world applications. Doctoral Program in Electrical, Electronic and Automatic Engineering. Committee: President: Carlo Regazzoni; Secretary: Fernando García Fernández; Member: Pascual Campoy Cerver
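The expansion-ratio cue at the heart of the obstacle-detection stage can be sketched in a few lines. The following is a hypothetical illustration, not the thesis code: a convex hull is built around tracked feature points in two consecutive frames, and a hull-area ratio above a threshold flags an approaching obstacle. All names and the 1.2 threshold are invented for the sketch.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace formula over the convex hull of the points."""
    h = convex_hull(points)
    if len(h) < 3:
        return 0.0
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
                         for i in range(len(h))))

def is_approaching(prev_pts, curr_pts, ratio_threshold=1.2):
    """Flag an obstacle whose hull area grows faster than the threshold."""
    a0, a1 = hull_area(prev_pts), hull_area(curr_pts)
    return a0 > 0 and a1 / a0 > ratio_threshold
```

For instance, feature points forming a 2x2 square in one frame and a 3x3 square in the next give an area ratio of 2.25, which this sketch would flag as an approaching obstacle.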

    High-accuracy patternless calibration of multiple 3D LiDARs for autonomous vehicles

    This article proposes a new method for estimating the extrinsic calibration parameters between any pair of multibeam LiDAR sensors on a vehicle. Unlike many state-of-the-art works, this method does not use any calibration pattern or reflective marks placed in the environment to perform the calibration; in addition, the sensors do not need to have overlapping fields of view. An iterative closest point (ICP)-based process is used to determine the values of the calibration parameters, resulting in better convergence and improved accuracy. Furthermore, a setup based on the Car Learning to Act (CARLA) simulator is introduced to evaluate the approach, enabling quantitative assessment with ground-truth data. The results show an accuracy comparable with other approaches that require more complex procedures and have a more restricted range of applicable setups. This work also provides qualitative results on a real setup, where the alignment between the different point clouds can be visually checked. The open-source code is available at https://github.com/midemig/pcd_calib. This work was supported in part by the Madrid Government (Comunidad de Madrid-Spain) under the Multiannual Agreement with UC3M ("Fostering Young Doctors Research," APBI-CM-UC3M) in the context of the V PRICIT (Research and Technological Innovation Regional Program); and in part by the Spanish Government through Grants ID2021-128327OA-I00 and TED2021-129374A-I00 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR
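The ICP loop underlying this kind of calibration can be illustrated with a toy 2D version: nearest-neighbour matching followed by a closed-form rigid alignment, iterated until the estimated transform stabilises. The real method works on 3D multibeam point clouds; the names, the brute-force matcher and the iteration count below are illustrative only.

```python
import math

def icp_2d(src, dst, iters=30):
    """Estimate the rotation theta and translation (tx, ty) aligning src onto dst."""
    pts = list(src)
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        # 1) Nearest-neighbour correspondences (brute force for the sketch).
        pairs = [(p, min(dst, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2))
                 for p in pts]
        n = len(pairs)
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        nx = sum(q[0] for _, q in pairs) / n
        ny = sum(q[1] for _, q in pairs) / n
        # 2) Closed-form 2D Procrustes solve on the centred pairs.
        num = sum((p[0]-mx)*(q[1]-ny) - (p[1]-my)*(q[0]-nx) for p, q in pairs)
        den = sum((p[0]-mx)*(q[0]-nx) + (p[1]-my)*(q[1]-ny) for p, q in pairs)
        dth = math.atan2(num, den)
        c, s = math.cos(dth), math.sin(dth)
        dtx = nx - (c * mx - s * my)
        dty = ny - (s * mx + c * my)
        # 3) Apply the increment and accumulate the total transform.
        pts = [(c * x - s * y + dtx, s * x + c * y + dty) for x, y in pts]
        tx, ty = c * tx - s * ty + dtx, s * tx + c * ty + dty
        theta += dth
    return theta, tx, ty

# A source scan and the same scan under a known rigid motion (0.3 rad, (0.5, -0.2)).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
c, s = math.cos(0.3), math.sin(0.3)
dst = [(c*x - s*y + 0.5, s*x + c*y - 0.2) for x, y in src]
theta, tx, ty = icp_2d(src, dst)
```

With correspondences this clean, the Procrustes step recovers the ground-truth motion in the first iteration; the value of ICP in the calibration setting is that it keeps refining the matching when correspondences are initially wrong.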

    Novel Bayesian Inference-Based Approach for the Uncertainty Characterization of Zhang's Camera Calibration Method

    Camera calibration is necessary for many machine vision applications. Calibration methods are based on linear or non-linear optimization techniques that aim to find the best estimate of the camera parameters. One of the most commonly used methods in computer vision for calibrating the intrinsic camera parameters and lens distortion (interior orientation) is Zhang's method. Additionally, the uncertainty of the camera parameters is normally estimated by assuming that their variability can be explained by the images of the different poses of a checkerboard. However, the degree of reliability of both the best parameter values and their associated uncertainties has not yet been verified. Inaccurate estimates of intrinsic and extrinsic parameters during camera calibration may introduce additional biases in post-processing. This is why we propose a novel Bayesian inference-based approach that has allowed us to evaluate the degree of certainty of Zhang's camera calibration procedure. For this purpose, the a priori probability was assumed to be the one estimated by Zhang, and the intrinsic parameters were recalibrated by Bayesian inversion. The uncertainty of the intrinsic parameters was found to differ from that estimated with Zhang's method. However, the major source of inaccuracy is caused by the procedure for calculating the extrinsic parameters.
The procedure used in the novel Bayesian inference-based approach significantly improves the reliability of the predictions of the image points, as it optimizes the extrinsic parameters. This work was supported by the Madrid Government (Comunidad de Madrid Spain) under the Multiannual Agreement with UC3M ("Fostering Young Doctors Research", APBI-CM-UC3M) in the context of the V PRICIT (Research and Technological Innovation Regional Programme), and by the FEDER/Ministry of Science and Innovation - Agencia Estatal de Investigacion (AEI) of the Government of Spain through the projects PID2022-136468OB-I00 and PID2022-142015OB-I00.
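The core idea, replacing a calibration point estimate with a sampled posterior, can be illustrated on a deliberately tiny model. The sketch below runs random-walk Metropolis-Hastings over a single intrinsic parameter (the focal length f) of a pinhole projection; the model, prior, noise level and step size are invented for the illustration and are not those of the article.

```python
import math
import random

def log_posterior(f, xs, us, sigma=1.0, prior_mu=750.0, prior_sd=100.0):
    """Pinhole model u = f * (x/z) with z = 1, Gaussian noise, Gaussian prior on f."""
    ll = sum(-0.5 * ((u - f * x) / sigma) ** 2 for x, u in zip(xs, us))
    lp = -0.5 * ((f - prior_mu) / prior_sd) ** 2
    return ll + lp

def metropolis(xs, us, n_samples=4000, step=5.0, seed=0):
    """Random-walk Metropolis-Hastings over the focal length."""
    rng = random.Random(seed)
    f = 750.0                       # start at the prior mean
    lp = log_posterior(f, xs, us)
    samples = []
    for _ in range(n_samples):
        cand = f + rng.gauss(0.0, step)
        lp_cand = log_posterior(cand, xs, us)
        if math.log(rng.random()) < lp_cand - lp:   # accept/reject
            f, lp = cand, lp_cand
        samples.append(f)
    return samples

# Synthetic observations generated from a "true" focal length of 800 px.
data_rng = random.Random(1)
xs = [0.1 * i for i in range(1, 11)]               # normalised x/z coordinates
us = [800.0 * x + data_rng.gauss(0.0, 1.0) for x in xs]
post = metropolis(xs, us)
mean_f = sum(post[1000:]) / len(post[1000:])        # posterior mean after burn-in
```

Unlike a least-squares solve, the chain yields a full set of posterior samples, so the spread of `post` directly characterises the parameter uncertainty rather than assuming it from pose variability.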

    Software Architecture for Autonomous and Coordinated Navigation of UAV Swarms in Forest and Urban Firefighting

    Advances in the field of unmanned aerial vehicles (UAVs) have led to an exponential increase in their market, thanks to the development of innovative technological solutions aimed at a wide range of applications and services, such as emergencies and those related to fires. The expansion of this market has also been accompanied by the birth and growth of so-called UAV swarms, whose current expansion is due to their robustness, versatility, and efficiency. Alongside these properties, the autonomous and cooperative navigation of such swarms remains an open field of study. In this paper we present an architecture that includes a set of complementary methods establishing different control layers to enable the autonomous and cooperative navigation of a swarm of UAVs. These layers include a sampling-based global trajectory planner, algorithms for obstacle detection and avoidance, and methods for autonomous decision-making based on deep reinforcement learning. The paper shows satisfactory results for a line-of-sight-based algorithm for smoothing the global path planner's trajectories in 2D and 3D. In addition, a novel method for autonomous navigation of UAVs based on deep reinforcement learning is presented, which has been tested in two different simulation environments with promising results regarding the use of these techniques to achieve autonomous UAV navigation. This work was supported by the Comunidad de Madrid Government through the Industrial Doctorates Grants (GRANT IND2017/TIC-7834)
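A line-of-sight smoothing step of the kind mentioned above can be sketched as follows: waypoints from a global planner are pruned by connecting each point to the furthest successor it can "see" across an occupancy grid. This is a hypothetical 2D illustration; the grid, names and segment-sampling resolution are invented and are not the paper's implementation.

```python
def line_is_free(grid, a, b, samples=50):
    """Sample the segment a-b and check every cell it crosses is free (0)."""
    for i in range(samples + 1):
        t = i / samples
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        if grid[int(round(y))][int(round(x))] == 1:
            return False
    return True

def smooth_path(grid, path):
    """Greedy line-of-sight pruning of intermediate waypoints."""
    out = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        # Walk back until a collision-free shortcut is found.
        while j > i + 1 and not line_is_free(grid, path[i], path[j]):
            j -= 1
        out.append(path[j])
        i = j
    return out

# 5x5 occupancy grid (1 = obstacle) with a vertical wall at x = 2.
grid = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
# A staircase-like planner output (x, y) that detours around the wall.
path = [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 4), (2, 4), (3, 4), (4, 4)]
```

On this example the nine-waypoint detour collapses to three waypoints, which is exactly the effect a UAV wants before tracking the path: fewer, longer, collision-free segments that can be flown smoothly.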

    Obstacle Detection and Avoidance System Based on Monocular Camera and Size Expansion Algorithm for UAVs

    One of the most challenging problems in the domain of autonomous aerial vehicles is the design of a robust real-time obstacle detection and avoidance system. The problem is especially complex for micro and small aerial vehicles, due to their Size, Weight and Power (SWaP) constraints. Therefore, lightweight sensors (i.e., a digital camera) can be the best choice compared with other sensors such as laser or radar. For real-time applications, different works rely on stereo cameras to obtain a 3D model of the obstacles or to estimate their depth. Instead, this paper proposes a method that mimics the human behavior of detecting the collision state of approaching obstacles using a monocular camera. The key of the proposed algorithm is to analyze the size changes of the detected feature points, combined with the expansion ratios of the convex hull constructed around the detected feature points from consecutive frames. During the Unmanned Aerial Vehicle (UAV) motion, the detection algorithm estimates the changes in the size of the area of the approaching obstacles. First, the method detects the feature points of the obstacles and extracts the obstacles that are likely to approach the UAV. Second, by comparing the area ratio of the obstacle with the position of the UAV, the method decides whether the detected obstacle may cause a collision. Finally, by estimating the obstacle's 2D position in the image and combining it with the tracked waypoints, the UAV performs the avoidance maneuver. The proposed algorithm was evaluated in real indoor and outdoor flights, and the obtained results show its accuracy compared with other related works. Research supported by the Spanish Government through the CICYT project ADAS ROAD-EYE (TRA2013-48314-C3-1-R)
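The size-expansion cue also yields a simple quantitative companion: for an obstacle closing at roughly constant speed, a scale ratio s between consecutive frames implies a time-to-contact of tau = dt / (s - 1). The sketch below is hypothetical; the function names, the median aggregation and the 2-second horizon are invented, and the paper's decision rule additionally uses the obstacle's image position relative to the UAV.

```python
def scale_ratio(prev_sizes, curr_sizes):
    """Median ratio of matched feature sizes between consecutive frames."""
    ratios = sorted(c / p for p, c in zip(prev_sizes, curr_sizes) if p > 0)
    return ratios[len(ratios) // 2]

def time_to_contact(prev_sizes, curr_sizes, dt=1 / 30):
    """Seconds until contact under constant closing speed; None if not expanding."""
    s = scale_ratio(prev_sizes, curr_sizes)
    return dt / (s - 1.0) if s > 1.0 else None

def collision_risk(prev_sizes, curr_sizes, dt=1 / 30, horizon=2.0):
    """Flag obstacles whose estimated time to contact falls inside the horizon."""
    ttc = time_to_contact(prev_sizes, curr_sizes, dt)
    return ttc is not None and ttc < horizon
```

For example, feature sizes growing by 10% between frames at 30 fps imply a time to contact of about a third of a second, well inside any reasonable avoidance horizon.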

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, the increase in computational capabilities, and the evolution of computer vision techniques, has enabled important advances in UAV technologies and applications. In particular, computer vision technologies integrated in UAVs make it possible to develop cutting-edge solutions that cope with aerial perception difficulties, such as visual navigation algorithms, obstacle detection and avoidance, and aerial decision-making. These expert technologies have opened a wide spectrum of UAV applications beyond the classic military and defense purposes. Unmanned Aerial Vehicles and Computer Vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning or to avoid aerial collisions automatically, among others. The presented survey therefore focuses on artificial perception applications that represent important recent advances in the expert systems field related to Unmanned Aerial Vehicles. The most significant advances in this field are presented, able to overcome fundamental technical challenges such as visual odometry, obstacle detection, mapping and localization, et cetera. In addition, they have been analyzed based on their capabilities and potential utility, and the applications and UAVs are divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R)

    Evaluating the acceptance of autonomous vehicles in the future

    Proceeding of: 35th IEEE Intelligent Vehicles Symposium (IV 2023), 04-07 June 2023, Anchorage, AK, USA. The continuous advance of the automotive industry is leading to the emergence of more advanced driver assistance systems that enable the automation of certain tasks and are undoubtedly aimed at achieving vehicles in which the driving task can be completely delegated. All these advances will bring changes in the paradigm of the automotive market, as is the case of insurance. For this reason, CESVIMAP and the Universidad Carlos III de Madrid are working on an Autonomous Testing pLatform for insurAnce reSearch (ATLAS) to study this technology and obtain first-hand knowledge about the responsibilities of each of the agents involved in the development of the vehicles of the future. This work gathers part of the advancements made in ATLAS, which have made it possible to have an autonomous vehicle with which to perform tests in real environments and demonstrations that bring the vehicle closer to future users. As a result of this work, and in collaboration with the Johannes Kepler University Linz, the impact, degree of acceptance and confidence of users in autonomous vehicles have been studied once they have taken a trip on board a fully autonomous vehicle such as ATLAS. This study has found that, while most users would be willing to use an autonomous vehicle, the same users are concerned about the use of this type of technology. Thus, understanding the reasons for this concern can help define the future of autonomous cars.

    An Appearance-Based Tracking Algorithm for Aerial Search and Rescue Purposes

    The automation of the Wilderness Search and Rescue (WiSAR) task requires a high-level understanding of varied scenery. Moreover, working in unfriendly and complex environments may delay the operation and consequently put human lives at risk. To address this problem, Unmanned Aerial Vehicles (UAVs) are used as a potential complement to conventional methods. These vehicles must be provided with reliable human detection and tracking algorithms, in order to find and track the bodies of victims in complex environments, and with a robust control system to maintain safe distances from the detected bodies. In this paper, a human detection method based on color and depth data captured from onboard sensors is proposed. Moreover, computing the data association from the skeleton pose and a visual appearance measurement allows the tracking of multiple people with invariance to scale, translation and rotation of the point of view with respect to the target objects. The system has been validated in real and simulated experiments, and the obtained results show the ability to track multiple individuals even after long-term disappearances. Furthermore, the simulations demonstrate the robustness of the implemented reactive control system as a promising tool for assisting the pilot in performing approach maneuvers in a safe and smooth manner. This research is supported by the Madrid Community project SEGVAUTO 4.0 (P2018/EMT-4362) and by the Spanish Government CICYT projects (TRA2015-63708-R and TRA2016-78886-C3-1-R), and the Ministerio de Educación, Cultura y Deporte para la Formación de Profesorado Universitario (FPU14/02143). We also gratefully acknowledge the support of the NVIDIA Corporation with the donation of the GPUs used for this research
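The data-association step described above can be sketched as an assignment problem: tracks and fresh detections are matched by minimising a combined cost of appearance distance and skeleton-pose distance over all possible assignments. This is a hypothetical illustration (brute force over permutations; a real system would use a proper assignment solver); the feature vectors and the equal 0.5/0.5 weighting are invented for the example.

```python
from itertools import permutations

def euclid(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match(tracks, detections, w_app=0.5, w_pose=0.5):
    """Return the detection index assigned to each track (assumes len(tracks) <= len(detections))."""
    def cost(assign):
        return sum(w_app * euclid(tracks[i]["appearance"], detections[j]["appearance"])
                   + w_pose * euclid(tracks[i]["pose"], detections[j]["pose"])
                   for i, j in enumerate(assign))
    best = min(permutations(range(len(detections)), len(tracks)), key=cost)
    return list(best)

# Two existing tracks and two new detections with swapped ordering.
tracks = [
    {"appearance": (0.9, 0.1), "pose": (10.0, 20.0)},
    {"appearance": (0.2, 0.8), "pose": (40.0, 22.0)},
]
detections = [
    {"appearance": (0.25, 0.75), "pose": (41.0, 21.0)},
    {"appearance": (0.85, 0.15), "pose": (11.0, 19.0)},
]
```

Combining both cues is what gives the invariance the abstract mentions: when two people cross paths, pose distance alone becomes ambiguous, and the appearance term disambiguates the assignment.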

    A Research Platform for Autonomous Vehicles Technologies Research in the Insurance Sector

    This article belongs to the Special Issue Intelligent Transportation Systems. This work presents a novel platform for research on autonomous vehicle technologies for the insurance sector. The platform has been collaboratively developed by the insurance company MAPFRE-CESVIMAP, Universidad Carlos III de Madrid and INSIA of the Universidad Politécnica de Madrid. The high-level architecture and several autonomous vehicle technologies developed within the framework of this collaboration are introduced and described in this work. Computer vision technologies for environment perception, V2X communication capabilities, enhanced localization, human-machine interaction and self-awareness are among the technologies that have been developed and tested. Some use cases that validate the technologies presented in the platform are also described; these include public demonstrations, technology tests and international competitions for self-driving technologies. Research was supported by the Spanish Government through the CICYT projects (TRA2016-78886-C3-1-R and RTI2018-096036-B-C21) and the Comunidad de Madrid through SEGVAUTO-4.0-CM (P2018/EMT-4362) and PEAVAUTO-CM-UC3M

    IVVI 2.0: An intelligent vehicle based on computational perception

    This paper presents IVVI 2.0, a smart research platform to foster intelligent systems in vehicles. Computational perception in intelligent transportation systems applications benefits from the huge amount of data available from the vehicle environment, and computer vision systems and laser scanners are the main devices used to accomplish this task. Both have been integrated in our intelligent vehicle to develop cutting-edge applications that cope with perception difficulties, data processing algorithms, expert knowledge, and decision-making. The long-term in-vehicle applications presented in this paper overcome significant and fundamental technical limitations, such as robustness in the face of changing environmental conditions. Our intelligent vehicle operates outdoors among pedestrians and other vehicles, and copes with illumination variation, i.e., shadows, low-light conditions, night vision, among others. Our applications thus ensure suitable robustness and safety under a large variety of lighting conditions and complex perception tasks. Some of these complex tasks are addressed with additional devices, such as inertial measurement units or differential global positioning systems, or with perception architectures that accomplish sensor fusion processes in an efficient and safe manner. Both the extra devices and the architectures enhance the accuracy of computational perception beyond the capabilities of each device separately. This work was supported by the Spanish Government through the CICYT projects (GRANT TRA2010-20225-C03-01 and GRANT TRA2011-29454-C03-02)
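The benefit of fusing sensors of different accuracy can be illustrated with the simplest possible scheme: inverse-variance weighting of two independent estimates, whose fused variance is always below that of either input. This is a generic textbook sketch, not the vehicle's fusion architecture; the sensor names and variances are invented for the example.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)        # smaller than both input variances
    return fused, fused_var

# Hypothetical example: vision says 10.2 m (var 0.5), DGPS says 9.8 m (var 0.25).
pos, var = fuse(10.2, 0.5, 9.8, 0.25)
```

The fused estimate leans toward the more precise sensor, and its variance (1/6 here) is lower than either input, which is the sense in which fusion "outreaches the properties of each device separately".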